Spread and defend infection in graphs
The spread of an infection, a contagion, a meme, an emotion, a message, and various
other spreadable objects has been discussed in several works. Burning and
firefighting have been discussed in particular on static graphs. Graph burning
simulates the spread of "fire" throughout a graph, with one additional unburned
node catching fire at each time-step; graph firefighting simulates the defence
of nodes by placing firefighters on nodes that have not yet burned while the
fire (started from a single source) spreads.
This article studies a combination of firefighting and burning on a graph
class that is a variation (generalization) of temporal graphs. Nodes can be
infected from "outside" the network. We present notions of both upgrading (of
unburned nodes, similar to firefighting) and repairing (of infected nodes). The
nodes that are burned, firefighted, or repaired are chosen probabilistically,
so a variable number of nodes may be infected, upgraded, or repaired in each
time-step.
In the model presented in this article, burning and firefighting proceed
concurrently. We introduce this system to enable the community to study the
spread of an infection and the counteracting upgrade/repair process against
each other. The graph class that we study (on which these processes are
simulated) is a variation of the temporal graph class in which, at each
time-step, a communication takes place probabilistically (iff an edge exists in
that time-step). In addition, a node can be "worn out" and thus removed from
the network, and a new healthy node can be added to the network as well. This
class of graphs enables systems with high complexity to be simulated and
studied.
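A minimal simulation sketch of one reading of this process, assuming uniform per-node probabilities (all parameter names here are illustrative assumptions, not the article's notation); node wear-out and the addition of new healthy nodes are omitted for brevity:

```python
import random

def simulate(n=20, steps=10, p_edge=0.3, p_infect=0.5,
             p_upgrade=0.2, p_repair=0.2, p_external=0.05, seed=42):
    """Toy sketch of concurrent burning/firefighting on a probabilistic
    temporal graph. Parameter names are illustrative, not the paper's."""
    rng = random.Random(seed)
    infected = {0}          # a single initial fire source
    upgraded = set()        # upgraded ("firefighted") nodes resist infection
    for _ in range(steps):
        # Temporal graph: each edge exists at this time-step with prob. p_edge.
        edges = [(u, v) for u in range(n) for v in range(u + 1, n)
                 if rng.random() < p_edge]
        newly = set()
        for u, v in edges:  # spread only along edges that exist right now
            for src, dst in ((u, v), (v, u)):
                if (src in infected and dst not in infected
                        and dst not in upgraded and rng.random() < p_infect):
                    newly.add(dst)
        # Infection can also arrive from "outside" the network.
        for v in range(n):
            if v not in infected and v not in upgraded and rng.random() < p_external:
                newly.add(v)
        infected |= newly
        # Concurrently, upgrade some unburned nodes and repair some infected ones.
        upgraded |= {v for v in range(n)
                     if v not in infected and rng.random() < p_upgrade}
        infected -= {v for v in infected if rng.random() < p_repair}
    return infected, upgraded
```

Because upgrades are only applied to unburned nodes and upgraded nodes reject infection, the infected and upgraded sets stay disjoint throughout the run.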
Multiplication and Modulo are Lattice Linear
In this paper, we analyze the lattice linearity of the multiplication and
modulo operations. We demonstrate that these operations are lattice linear and
that the parallel processing algorithms we study for both operations exploit
the lattice linearity of their respective problems. This implies that these
algorithms can be implemented in asynchronous environments, where nodes are
allowed to read old information from each other and are still guaranteed to
converge within the same time complexity. These algorithms also exhibit
properties similar to snap-stabilization, i.e., starting from an arbitrary
state, the system follows a trace strictly according to its specification.
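The paper's multiplication and modulo algorithms are not reproduced here; as a toy illustration of why monotonic updates tolerate stale reads, consider distributed minimum computation: each node only ever decreases its value, so reading an old (hence larger-or-equal) snapshot of a neighbour can delay convergence but never violate correctness:

```python
import random

def min_with_stale_reads(values, adj, staleness=3, rounds=40, seed=1):
    """Each node repeatedly takes the min of its own value and a possibly
    stale snapshot of each neighbour's value; it converges regardless of
    (bounded) staleness because values only ever decrease."""
    rng = random.Random(seed)
    history = [list(values)]            # snapshots nodes may read from
    for _ in range(rounds):
        cur = history[-1]
        new = list(cur)
        for v, nbrs in enumerate(adj):
            for u in nbrs:
                # read an arbitrarily old (bounded) snapshot of u's value
                old = history[-1 - rng.randrange(min(staleness, len(history)))]
                new[v] = min(new[v], old[u])
        history.append(new)
    return history[-1]
```

On a path graph the minimum still propagates end to end; staleness only slows each hop by at most `staleness` rounds.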
Lattice Linear Problems vs Algorithms
Modelling problems using predicates that induce a partial order among global
states was introduced as a way to permit asynchronous execution in
multiprocessor systems. A key property of such problems is that the predicate
induces a single lattice in the state space, which guarantees that the
execution is correct even if nodes execute with old information about their
neighbours.
Thus, a compiler that is aware of this property can ignore data dependencies
and allow the application to continue its execution with the available data
rather than waiting for the most recent one. Unfortunately, many interesting
problems do not exhibit lattice linearity. This issue was alleviated with the
introduction of eventually lattice linear algorithms. Such algorithms induce a
partial order in a subset of the state space even though the problem cannot be
defined by a predicate under which the states form a partial order.
This paper focuses on analyzing and differentiating between lattice linear
problems and algorithms. It also introduces a new class of algorithms called
(fully) lattice linear algorithms. A characteristic of these algorithms is that
the entire reachable state space is partitioned into one or more lattices and
the initial state locks into one of these lattices. Thus, under a few
additional constraints, the initial state can uniquely determine the final
state. For demonstration, we present lattice linear self-stabilizing algorithms
for minimal dominating set and graph colouring problems, and a parallel
processing 2-approximation algorithm for vertex cover.
The algorithm for minimal dominating set converges in n moves, and that for
graph colouring converges in n+2m moves. The algorithm for vertex cover is the
first lattice linear approximation algorithm for an NP-hard problem; it
converges in n moves.
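The paper's lattice linear parallel algorithm is not reproduced here; for intuition on the guarantee involved, the classic sequential matching-based 2-approximation for vertex cover, which that algorithm relates to, can be sketched as:

```python
def vertex_cover_2approx(edges):
    """Greedy maximal matching: whenever an edge is uncovered, take both
    endpoints. Any optimal cover must contain at least one endpoint of
    each matched edge, so this cover is at most twice the optimum."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover
```

On the path 0-1-2-3 this returns {0, 1, 2, 3}, within twice the optimum {1, 2}.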
Lattice Linearity in Assembling Myopic Robots on an Infinite Triangular Grid
In this paper, we study the problem of gathering distance-1 myopic robots on
an infinite triangular grid. We show that the algorithm developed by Goswami et
al. (SSS, 2022) is lattice linear. This implies that a distributed scheduler,
assumed therein, is not required for this algorithm: it runs correctly in
asynchrony. It also implies that the algorithm works correctly even if the
robots are equipped with a unidirectional camera to see the
neighbouring robots (rather than an omnidirectional one, which would be
required under a distributed scheduler). Due to lattice linearity, we can
predetermine the point of gathering. We also show that this algorithm converges
in fewer rounds than the bound shown in Goswami et al.
Technical Report: Using Static Analysis to Compute Benefit of Tolerating Consistency
Synchronization is the Achilles heel of concurrent programs. It is often
required to ensure that the execution of a concurrent program can be
serialized; without it, a program suffers from consistency violations.
Recently, it was shown that if programs are designed to tolerate such
consistency violation faults (cvfs) then one can obtain a substantial
performance gain. Previous efforts to analyze the effect of cvf-tolerance are
limited to run-time analysis of the program to determine if tolerating cvfs
can improve performance. Such run-time analysis is very expensive and provides
limited insight.
In this work, we consider the question, `Can static analysis of the program
predict the benefit of cvf-tolerance?' We find that the answer to this
question is affirmative. Specifically, we use static analysis to evaluate the
cost of a cvf and demonstrate that it can be used to predict the benefit of
cvf-tolerance. We also find that when faced with a large state space, partial
analysis of the state space (via sampling) also provides the information
required to predict the benefit of cvf-tolerance. Furthermore, we observe that
the cvf-cost distribution is exponential in nature, i.e., the probability that
a cvf has a given cost decreases exponentially with that cost: most cvfs cause
no/low perturbation whereas a small number of cvfs cause a large perturbation.
This opens up new avenues to evaluate the benefit of cvf-tolerance.
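As a toy illustration of the sampling idea only (not the paper's program model, and this toy will not reproduce the exponential shape reported above), take hypothetical states 0..n-1 with rank(s) = s, a cvf that perturbs the state uniformly at random, and cost defined as the rank increase the perturbation causes; sampling then estimates the cost distribution without enumerating every transition:

```python
import random
from collections import Counter

def sample_cvf_costs(n_states=100, samples=10_000, seed=7):
    """Estimate a cvf-cost distribution by sampling random perturbations
    instead of enumerating the full state space (toy rank model)."""
    rng = random.Random(seed)
    costs = Counter()
    for _ in range(samples):
        s = rng.randrange(n_states)             # current state, rank = s
        perturbed = rng.randrange(n_states)     # state after the cvf
        costs[max(0, perturbed - s)] += 1       # cost = rank increase
    return costs
```

Even in this crude model, zero-cost cvfs dominate the samples, matching the qualitative observation that most cvfs cause no or low perturbation.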
Evaluation of a quality improvement intervention to reduce anastomotic leak following right colectomy (EAGLE): pragmatic, batched stepped-wedge, cluster-randomized trial in 64 countries
Background
Anastomotic leak affects 8 per cent of patients after right colectomy, with a 10-fold increased risk of postoperative death. The EAGLE study aimed to develop an international, standardized quality improvement intervention and to test whether it could reduce anastomotic leaks.
Methods
The internationally intended protocol, iteratively co-developed by a multistage Delphi process, comprised an online educational module introducing risk stratification, an intraoperative checklist, and harmonized surgical techniques. Clusters (hospital teams) were randomized to one of three arms with varied sequences of intervention/data collection by a derived stepped-wedge batch design (at least 18 hospital teams per batch). Patients were blinded to the study allocation. Low- and middle-income country enrolment was encouraged. The primary outcome (assessed by intention to treat) was anastomotic leak rate, and subgroup analyses by module completion (at least 80 per cent of surgeons, high engagement; less than 50 per cent, low engagement) were preplanned.
Results
A total 355 hospital teams registered, with 332 from 64 countries (39.2 per cent low and middle income) included in the final analysis. The online modules were completed by half of the surgeons (2143 of 4411). The primary analysis included 3039 of the 3268 patients recruited (206 patients had no anastomosis and 23 were lost to follow-up), with anastomotic leaks arising before and after the intervention in 10.1 and 9.6 per cent respectively (adjusted OR 0.87, 95 per cent c.i. 0.59 to 1.30; P = 0.498). The proportion of surgeons completing the educational modules was an influence: the leak rate decreased from 12.2 per cent (61 of 500) before intervention to 5.1 per cent (24 of 473) after intervention in high-engagement centres (adjusted OR 0.36, 0.20 to 0.64; P < 0.001), but this was not observed in low-engagement hospitals (8.3 per cent (59 of 714) and 13.8 per cent (61 of 443) respectively; adjusted OR 2.09, 1.31 to 3.31).
Conclusion
Completion of globally available digital training by engaged teams can alter anastomotic leak rates. Registration number: NCT04270721 (http://www.clinicaltrials.gov).